Inside the FDA's First Warning Letter Citing AI Misuse as a cGMP Violation

There is a sentence in a recent FDA warning letter that should stop every quality leader, AI product manager, and regulatory professional in their tracks.

When FDA investigators found that a drug manufacturer had distributed products without conducting process validation — one of the most fundamental requirements in pharmaceutical manufacturing — they asked why. The answer: the AI agent the company was using to manage its compliance activities never told them it was required.

The company in question is Purolea Cosmetics Lab, a drug and cosmetics manufacturer based in Livonia, Michigan. The FDA warning letter, issued April 2, 2026, following an inspection conducted in late October 2025, is notable for many reasons. But one section stands entirely apart from everything that came before it in FDA enforcement history.

For the first time, FDA included a dedicated heading in a warning letter titled: "Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing."

The enforcement phase of AI governance in regulated industries has arrived.

"The AI never told us it was required." — Purolea Cosmetics Lab's response to FDA investigators when asked why process validation had not been conducted before product distribution.

What the Warning Letter Actually Says

The Standard Violations — and Then Something New

The Purolea warning letter covers familiar ground in its first pages. FDA inspectors documented insanitary manufacturing conditions, including the presence of insects, filth, and clutter throughout the facility. They noted that the facility's loading bay door, when open, directly exposed manufacturing areas to the outside environment — a basic contamination control failure.

Three cGMP violations were cited under 21 CFR Parts 210 and 211. First, the company released finished homeopathic drug products without microbiological testing, meaning there was no scientific evidence that batches were free of objectionable microbial contamination. Second, the company failed to adequately test incoming components for identity, purity, strength, and quality, relying instead on supplier certificates of analysis without independently verifying their reliability — a practice FDA regards as inadequate supplier qualification. Third, and most fundamentally, the company's Quality Unit (QU) failed to exercise meaningful oversight over manufacturing operations: batch records were not reviewed before release, and adequate production and process controls were not established.

The letter also cited two specific drug products — "Dermveda Extra Strength Shingles Relief" and "Dermveda Extra Strength Ultra Genital Herpes Relief" — as unapproved new drugs. These products are labeled as homeopathic but are intended to treat serious conditions including shingles and genital herpes. They have no FDA-approved application in effect and cannot legally be introduced into interstate commerce. FDA was explicit that these products raise particular public health concerns given the severity of the conditions they purport to treat.

These violations are serious. But the section that will define this warning letter's legacy in regulatory history comes next.

The AI Section: Unprecedented Enforcement Language

During the inspection, the owner of Purolea told FDA investigators that the company had used AI agents to help comply with FDA regulations — specifically, to create drug product specifications, procedures, and master production or control records. The intent, presumably, was to use AI to bootstrap a compliance system that the company lacked the expertise to build from scratch.

FDA's response was direct: using AI as an aid in document creation does not transfer or dilute the Quality Unit's responsibility. The firm must review AI-generated documents to ensure they are accurate and actually compliant with cGMP. The failure to do so constitutes a violation of 21 CFR 211.22(c), which assigns quality unit responsibility for reviewing and approving all procedures and specifications.

But it was the second AI-related finding that revealed the full depth of the problem. During the inspection, FDA investigators informed the firm that it had not conducted process validation prior to distribution — a requirement under 21 CFR 211.100 that is so foundational it is taught in the first days of any pharmaceutical quality training program. The firm's response: it was not aware of the legal requirement because the AI agent it had been using never told it that process validation was required.

FDA's response in the warning letter: "Any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of your firm's Quality Unit in accordance with section 501(a)(2)(B) of the FD&C Act. See also 21 CFR 211.22; 21 CFR 211.100."

In plain terms: a manufacturer distributed drug products intended for human use without validating its manufacturing process, and offered as its explanation that an AI tool had a knowledge gap. The AI did not know. Therefore the company did not know. And products shipped.

FDA accepted neither the explanation nor the logic.

The Gesund.ai Perspective: This Was Predictable — and Preventable

A Foreseeable Consequence of Unstructured AI Adoption

At Gesund.ai, we have argued for years that AI in regulated environments is not just a technical challenge — it is a governance challenge. The Purolea case is an extreme illustration of a dynamic that exists, in less dramatic form, across the life sciences industry right now: organizations adopting AI tools with genuine efficiency intentions but without the governance infrastructure that regulated environments require.

What makes the Purolea case instructive is not that the company was malicious. By all indications, the owner was attempting to use AI precisely because it was supposed to help with compliance. The failure was architectural: AI was deployed as a substitute for institutional knowledge and human expertise rather than as an augmentation of it. When the AI had gaps — and all AI systems have gaps — the company had no mechanism to catch them. The QU review process that should have served as a safety net did not exist in practice.

This is the core failure pattern that effective AI governance is designed to prevent. The question for the industry is not whether AI introduces risks into regulated workflows — it clearly does. The question is whether those risks are identified, controlled, and documented before a product reaches a patient, or after an FDA investigator arrives.

The Human-in-the-Loop Principle Is Not Optional

FDA's statement in the warning letter — that all AI-generated outputs used in cGMP activities must be reviewed and cleared by an authorized human representative of the Quality Unit — is not a new standard. It is an application of existing requirements to a new technology context. The regulatory logic is identical to the standard for any other document or record that enters a pharmaceutical quality system: a qualified human must review, verify, and approve it.

What changes with AI is the speed, volume, and surface area of content generation. An AI tool can produce a comprehensive-looking SOP or specification in seconds. The temptation to accept that output at face value — especially when the organization lacks deep regulatory expertise — is real. The Purolea case shows what happens when that temptation is not resisted.

The human-in-the-loop requirement is not a limitation that makes AI less useful in regulated environments. It is the design principle that makes AI usable in regulated environments at all. Without it, every AI-generated output carries unquantified risk. With it, AI becomes what it should be: a powerful force multiplier for qualified human judgment.

AI Governance Is Not a Checkbox — It Is a System

For organizations that have already internalized the human-in-the-loop principle, the deeper lesson of the Purolea case is about systemic governance. Human review is necessary but not sufficient. Effective AI governance in a regulated environment requires: knowing which AI tools are in use and for which regulated activities; maintaining documented procedures for AI tool usage; ensuring that reviewers of AI-generated outputs have the subject-matter expertise to actually evaluate them; building audit trails that capture what the AI produced, who reviewed it, what modifications were made, and when; and treating AI tools themselves as part of the supplier/vendor qualification framework.
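To make the shape of such a system concrete, the following is a minimal Python sketch of a pre-release check that asks whether a single AI touchpoint satisfies each of these five elements. Every name and field in it (the touchpoint dictionary, check_ai_touchpoint_governed, and so on) is hypothetical and purely illustrative, not a prescribed schema.

```python
# Hypothetical pre-release check tying the five governance elements together.
# All names here (the touchpoint dict, its keys, the function) are invented
# for illustration; they do not come from the warning letter or any platform.

def check_ai_touchpoint_governed(touchpoint: dict) -> list[str]:
    """Return the list of governance gaps for one AI-assisted activity."""
    gaps = []
    if not touchpoint.get("inventoried"):
        gaps.append("tool not in the AI inventory")
    if not touchpoint.get("sop_id"):
        gaps.append("no documented procedure governs this AI use")
    if not touchpoint.get("reviewer_qualified"):
        gaps.append("assigned reviewer lacks documented subject-matter expertise")
    if not touchpoint.get("audit_events"):
        gaps.append("no audit trail entries for AI outputs")
    if not touchpoint.get("vendor_qualified"):
        gaps.append("AI tool not covered by supplier/vendor qualification")
    return gaps


# Example: a touchpoint that satisfies three elements but fails two.
gaps = check_ai_touchpoint_governed({
    "tool": "llm-doc-assistant",
    "inventoried": True,
    "sop_id": "SOP-AI-001",
    "reviewer_qualified": True,
    "audit_events": [],         # nothing logged yet
    "vendor_qualified": False,  # tool never went through supplier qualification
})
print(gaps)  # two gaps: missing audit trail entries, vendor not qualified
```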

The warning letter makes clear that FDA is applying existing quality system requirements to AI governance — not inventing new ones. That is important. It means the governance architecture already exists in cGMP-compliant organizations. What is needed is the intentional extension of that architecture to cover AI touchpoints, which in many organizations have proliferated faster than governance processes have followed.

This Did Not Come Out of Nowhere: FDA's AI Enforcement Trajectory

Industry observers have noted that the Purolea warning letter did not emerge in a regulatory vacuum. FDA has been signaling its expectations about AI in regulated settings for several years, following a pattern that the agency uses consistently: publish guidance, set expectations, then enforce.

The trajectory is traceable. In March 2023, CDER published a discussion paper on AI in drug manufacturing, opening the conversation about how existing regulatory frameworks apply to AI-assisted processes. In January 2025, FDA issued a draft guidance introducing a seven-step credibility assessment framework for AI used in regulatory decision-making. In January 2026, FDA published its Guiding Principles of Good AI Practice in Drug Development, establishing clear expectations around transparency, human oversight, and validation. Then, in April 2026, the Purolea warning letter arrived with a dedicated enforcement section on AI misuse.

The education phase is complete. The expectation-setting phase is complete. We are now in the enforcement phase.

The significance of FDA creating a dedicated heading — "Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing" — should not be underestimated. Regulatory enforcement headings in warning letters represent formalized, citable categories of deficiency. They signal to the entire industry that this is now a named and documented area of scrutiny. Every subsequent warning letter, Form 483 observation, and inspection conversation will be shaped by this precedent.

With the Purolea letter, the enforcement phase has opened. The question for every regulated organization is whether its AI governance maturity is inspection-ready.

Broader Implications: Beyond Pharmaceutical Manufacturing

The Principle Extends Across Every Regulated Domain

Although the Purolea warning letter specifically addresses pharmaceutical manufacturing under 21 CFR Parts 210 and 211, the underlying regulatory logic extends to every domain where FDA-regulated activities intersect with AI. Medical device developers using AI in design controls, clinical study teams using AI to draft protocols or informed consent documents, SaMD developers building AI into their products — all operate in regulatory environments that apply equivalent principles about documentation quality, human oversight, and quality unit accountability.

The FDA has been building a comprehensive AI governance framework that cuts across CDER, CDRH, and other centers. The Purolea enforcement action is the first formal citation under that framework in a pharmaceutical context, but it reflects a regulatory philosophy that will manifest in equivalent ways across the device, biologics, and digital health spaces.

The Outsourced Ecosystem Is Particularly Exposed

Under FDA's regulatory framework, manufacturers are responsible for the quality of their products regardless of agreements with contract facilities. CDMOs, contract testing laboratories, and contract packagers operate as extensions of the manufacturer. This accountability structure now intersects with AI governance in ways that existing quality agreements have not addressed.

If a contract organization uses AI tools to draft batch records, specifications, or deviation reports without disclosing this to its client or without maintaining documented human review procedures, the accountability exposure flows to both parties. Sponsor companies must now ask new questions during supplier qualification and audits: Which cGMP-relevant activities involve AI tools? What are the documented review and approval procedures for AI-generated outputs? How is AI tool use captured in the audit trail?

What About Sophisticated AI in Clinical and Diagnostic Contexts?

There is an important distinction worth drawing explicitly. The AI tools described in the Purolea case are general-purpose large language models used to generate compliance documentation — not purpose-built, validated clinical AI systems developed under controlled conditions. The Purolea case is fundamentally about the misuse of general AI tools in a regulated context, not about the risks inherent in validated clinical AI.

The distinction matters for the broader health AI ecosystem. Clinical AI systems developed under proper lifecycle governance — with documented training data, validation studies, performance benchmarks, human oversight protocols, and post-market monitoring — occupy a fundamentally different risk category than an off-the-shelf AI tool used to draft SOPs by a small manufacturer with no quality infrastructure.

The Purolea warning letter does not imply that clinical AI is inherently problematic. It implies that any AI used in a regulated context must be subject to the same governance discipline that the regulatory context demands. That principle applies equally to a simple document assistant and a sophisticated diagnostic algorithm — the governance rigor simply scales with the clinical risk.

What Regulated Organizations Should Do Now

An Immediate Action Agenda

The Purolea case provides a clear signal of what FDA investigators will look for when AI use is disclosed during inspections. Organizations using AI in any cGMP, GCP, or GLP-adjacent activities should take several immediate steps.

Inventory every AI touchpoint in regulated workflows. This includes document drafting assistants, AI-aided specification generation, deviation investigation support, CAPA tools, and any other AI-assisted activity that touches a cGMP record. The inventory should map each tool to the specific regulated activity it supports.
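As one illustration of what an inventory entry might capture, here is a brief Python sketch. The AIToolInventoryEntry class and all of its field names are hypothetical assumptions, not a standard schema or a product feature.

```python
# Hypothetical inventory entry mapping one AI tool to the regulated
# activities it supports. The class and field names are illustrative,
# not a standard schema.
from dataclasses import dataclass, field


@dataclass
class AIToolInventoryEntry:
    tool_name: str
    vendor: str
    version: str
    regulated_activities: list[str] = field(default_factory=list)
    governing_sop: str | None = None  # the written procedure controlling this use


inventory = [
    AIToolInventoryEntry(
        tool_name="llm-doc-assistant",
        vendor="ExampleVendor",
        version="2.1",
        regulated_activities=["SOP drafting", "deviation investigation support"],
        governing_sop="SOP-AI-001",
    ),
]

# A simple completeness check: every tool in use must name its governing SOP.
ungoverned = [e.tool_name for e in inventory if e.governing_sop is None]
assert not ungoverned, f"AI tools in use without a documented procedure: {ungoverned}"
```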

Establish documented QU review procedures for AI-generated outputs. Every AI-generated or AI-assisted document that enters the quality system must be reviewed and approved by a qualified human with the subject-matter expertise to evaluate its accuracy and regulatory compliance. The procedure must be written, followed, and auditable.
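A hedged sketch of what such a gate could look like in code follows. The point is that release is blocked structurally, not by policy alone, until a qualified human review is recorded; the AIDraft, qu_approve, and enter_quality_system names are invented for illustration.

```python
# Hypothetical review gate: an AI-generated draft cannot enter the quality
# system until a qualified human approves it. All names are invented for
# illustration.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIDraft:
    content: str
    record_type: str                # e.g. "specification", "master batch record"
    approved_by: str | None = None  # qualified QU reviewer, once assigned
    approved_at: datetime | None = None


def qu_approve(draft: AIDraft, reviewer_id: str) -> None:
    """Record the human review that must precede any use of AI output."""
    draft.approved_by = reviewer_id
    draft.approved_at = datetime.now(timezone.utc)


def enter_quality_system(draft: AIDraft) -> None:
    """Refuse unreviewed AI output; the gate, not the AI, is the control."""
    if draft.approved_by is None:
        raise PermissionError(f"unreviewed AI-generated {draft.record_type}")
    print(f"{draft.record_type} accepted (reviewed by {draft.approved_by})")


draft = AIDraft(content="...", record_type="specification")
# enter_quality_system(draft)  # would raise PermissionError: no QU review yet
qu_approve(draft, reviewer_id="qu-reviewer-17")
enter_quality_system(draft)
```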

Train your people on what AI cannot do. The Purolea case is a textbook example of what happens when an organization treats AI output as authoritative without independent verification. Every team member who works with AI tools in a regulated context must understand that AI can have knowledge gaps, generate plausible but incorrect content, and lack awareness of the full regulatory landscape.

Update quality agreements to address AI. Sponsor-contractor relationships now need to explicitly address AI use: disclosure requirements, permitted use cases, human oversight expectations, and audit rights for AI-assisted activities.

Build the audit trail. Document which AI tools are used, for what purpose, what outputs they generated, who reviewed them, what modifications were made, and when. This is the evidence of controlled use that FDA will seek. Absence of this documentation is itself a deficiency.
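One minimal way to capture those elements is an append-only event log, sketched below in Python. The log_ai_event function and its field names are assumptions for illustration; hashing the AI output means that later, unlogged edits to the stored document become detectable.

```python
# Hypothetical append-only audit trail for AI-assisted records. Field names
# are assumptions for illustration, not a prescribed schema.
import hashlib
import json
from datetime import datetime, timezone

audit_log: list[dict] = []  # in practice: write-once storage, not a Python list


def log_ai_event(tool: str, purpose: str, output: str,
                 reviewer: str, modifications: str) -> dict:
    """Capture tool, purpose, output fingerprint, reviewer, and edits."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        # Hashing the output makes later, unlogged edits detectable.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewer": reviewer,
        "modifications": modifications,
    }
    audit_log.append(event)
    return event


log_ai_event(
    tool="llm-doc-assistant v2.1",
    purpose="draft cleaning-validation SOP",
    output="(full text of the AI draft)",
    reviewer="qu-reviewer-17",
    modifications="corrected acceptance criteria; added process validation step",
)
print(json.dumps(audit_log[-1], indent=2))
```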

Treat AI governance as part of your quality system — permanently. AI governance should not be a one-time remediation project. It should be embedded into your quality management system, your change control process, and your periodic management review agenda.

Compliance as Infrastructure: The Gesund.ai Approach

At Gesund.ai, we have built our entire platform around a conviction that is now being validated by FDA enforcement: in regulated healthcare environments, AI governance is not a layer you add on top of innovation — it is the foundation that makes innovation sustainable and defensible.

The GDAP platform — the Gesund.ai Development and Assurance Platform — was designed from the ground up to operationalize exactly the kind of human-in-the-loop, audit-ready, fully traceable AI lifecycle that the Purolea warning letter demands of any organization using AI in regulated activities. Every model output, every annotation, every validation run, and every deployment decision is logged with user IDs, timestamps, and version provenance. Human review is not optional in the platform — it is structured into the workflow. The audit trail is not reconstructed after the fact — it is built continuously, in real time.

The Purolea case involved a small manufacturer using general-purpose AI tools without governance infrastructure. But the principle it illustrates applies to every organization at every scale: clinical-grade AI requires clinical-grade governance. The sophistication of the AI system does not determine the adequacy of governance — the regulatory context does.

We believe that the most important competitive advantage available to health AI companies and regulated life sciences organizations in 2026 is not the novelty of their algorithms. It is the defensibility of their governance. The organizations that will win — commercially, clinically, and in the eyes of regulators — are the ones that have built systems where every AI decision is traceable, every human review is documented, and every compliance claim is backed by evidence.

The Purolea warning letter is a stark reminder of what the alternative looks like.

Conclusion: The Enforcement Phase Has Opened

FDA's warning letter to Purolea Cosmetics Lab is a landmark document. Not because the underlying violations are unusual — insanitary conditions, missing batch testing, and inadequate quality oversight are depressingly common themes in pharmaceutical enforcement. But because for the first time, FDA has formalized AI misuse as its own named, citable cGMP deficiency.

The message is not that AI is incompatible with regulated manufacturing. AI offers genuine and substantial value in pharmaceutical and healthcare contexts — efficiency, consistency, pattern recognition, scale. FDA has been actively engaging with AI-enabled innovation across its regulatory programs for years and has expressed support for responsible AI adoption.

The message is that value without oversight is a regulatory risk. In a cGMP environment, the quality unit does not stop being accountable when an algorithm writes the document. The human-in-the-loop requirement is not a bureaucratic obstacle to AI adoption — it is the mechanism that makes AI adoption in regulated environments responsible.

The Purolea case will be cited for years. The question for every organization using AI in a regulated context is not whether FDA will notice. It is whether, when it does, there is a governed system to point to — or only an AI agent that never said it was required.

Bibliography

1. U.S. Food and Drug Administration. (2026, April 2). Warning Letter: Purolea Cosmetics Lab, MARCS-CMS 722591. Center for Drug Evaluation and Research. fda.gov.

2. Hotha, K. (2026, April 22). FDA's first cGMP enforcement action on AI misuse in drug manufacturing. BioProcess Online. bioprocessonline.com.

3. ECA Academy / GMP Compliance. (2026, April). Use of AI agents leads to the first FDA warning letter relating to AI. gmp-compliance.org.

4. Redica Systems. (2026, April). The FDA's first AI warning: over-reliance is a cGMP violation. redica.com.

5. Clarkston Consulting. (2026, April). FDA issues warning for inappropriate AI use: what pharma manufacturers need to know. clarkstonconsulting.com.

6. Regulatory Affairs Professionals Society (RAPS). (2026, April). FDA warns firm for inappropriate use of AI in drug manufacturing. raps.org.

7. ComplianceG / FICSA. (2026, April). AI in GxP: key lessons from FDA's April 2026 warning letter. complianceg.com.

8. U.S. Food and Drug Administration, CDER. (2023, March). Discussion paper: Artificial intelligence in drug manufacturing. fda.gov.

9. U.S. Food and Drug Administration. (2025, January). Draft guidance: Artificial intelligence in drug manufacturing — a framework for credibility assessment. fda.gov.

10. U.S. Food and Drug Administration. (2026, January). Guiding principles of good AI practice in drug development. fda.gov.

11. Leucine. (2025, November). 2025 FDA warning letter trends: what pharma can learn from this year's top citations. leucine.io.

12. IntuitionLabs. (2026, April). Contract manufacturing oversight: 2026 FDA enforcement data. intuitionlabs.ai.

About the Author

Enes HOSGOR

CEO at Gesund.ai

Dr. Enes Hosgor is an engineer by training and an AI entrepreneur by trade, driven to unlock scientific and technological breakthroughs. He has spent the last 10+ years building AI products and companies in high-compliance environments. After selling his first ML company, based on his Ph.D. work at Carnegie Mellon University, he joined a digital surgery company named Caresyntax to found and lead its ML division. His penchant for healthcare comes from his family of physicians, including his late father, his sister, and his wife. Formerly a Fulbright Scholar at the University of Texas at Austin, he has published scientific work in Medical Image Analysis, the International Journal of Computer Assisted Radiology and Surgery, Nature Scientific Reports, and the British Journal of Surgery, among other peer-reviewed outlets.